Public Sector


Responsible AI Adoption in the Public Sector: A Data-Centric Taxonomy of AI Adoption Challenges

Nikiforova, Anastasija, Lnenicka, Martin, Melin, Ulf, Valle-Cruz, David, Gill, Asif, Flores, Cesar Casiano, Sirait, Emyana, Luterek, Mariusz, Dreyling, Richard Michael, Tesarova, Barbora

arXiv.org Artificial Intelligence

Despite the transformative potential of Artificial Intelligence (AI) for public sector services, decision-making, and administrative efficiency, adoption remains uneven due to complex technical, organizational, and institutional challenges. Responsible AI frameworks emphasize fairness, accountability, and transparency, aligning with principles of trustworthy AI and fair AI, yet remain largely aspirational, overlooking technical and institutional realities, especially foundational data and governance. This study addresses this gap by developing a taxonomy of data-related challenges to responsible AI adoption in government. Based on a systematic review of 43 studies and 21 expert evaluations, the taxonomy identifies 13 key challenges across technological, organizational, and environmental dimensions, including poor data quality, limited AI-ready infrastructure, weak governance, misalignment in human-AI decision-making, and economic and environmental sustainability concerns. Annotated with institutional pressures, the taxonomy serves as a diagnostic tool to surface 'symptoms' of high-risk AI deployment and guides policymakers in building the institutional and data governance conditions necessary for responsible AI adoption.
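The abstract frames the taxonomy as a diagnostic tool: challenges grouped by technological, organizational, and environmental (TOE) dimensions, checked against observed "symptoms". A minimal Python sketch of that idea follows; it is not from the paper, and the mapping of the named challenges to dimensions is an assumption for illustration (only the challenges mentioned in the abstract are listed, not all 13).

```python
# Hypothetical encoding of the taxonomy's TOE structure. The assignment of
# each challenge to a dimension is assumed, not taken from the paper.
TAXONOMY = {
    "technological": [
        "poor data quality",
        "limited AI-ready infrastructure",
    ],
    "organizational": [
        "weak governance",
        "misalignment in human-AI decision-making",
    ],
    "environmental": [
        "economic sustainability concerns",
        "environmental sustainability concerns",
    ],
}

def diagnose(observed_symptoms):
    """Return, per TOE dimension, which taxonomy challenges match the
    symptoms observed in a given deployment context."""
    return {
        dim: [c for c in challenges if c in observed_symptoms]
        for dim, challenges in TAXONOMY.items()
        if any(c in observed_symptoms for c in challenges)
    }

# Example: a deployment showing two warning signs.
report = diagnose({"poor data quality", "weak governance"})
```

Used this way, the taxonomy acts as a checklist: a non-empty report flags the dimensions where institutional or data governance conditions need attention before deployment.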


UK government urged to offer more transparency over OpenAI deal

The Guardian

Ministers are facing calls for greater transparency about public data that may be shared with the US tech company OpenAI after the government signed a wide-ranging agreement with the $300bn (£222bn) company that critics compared to letting a fox into a henhouse. Chi Onwurah, the chair of the House of Commons select committee on science, innovation and technology, warned that Monday's sweeping memorandum of understanding between OpenAI's chief executive, Sam Altman, and the technology secretary, Peter Kyle, was "very thin on detail" and called for guarantees that public data would remain in the UK and clarity about how much of it OpenAI would have access to. The deal paves the way for the Silicon Valley firm behind ChatGPT to explore deploying advanced AI technology in areas including justice, defence and security, and education. It includes OpenAI and the government "partnering to develop safeguards that protect the public and uphold democratic values". Kyle said he wanted Britain to be "front and centre when it comes to developing and deploying AI" and "this can't be achieved without companies like OpenAI".


UK government's deal with Google 'dangerously naive', say campaigners

The Guardian

Google has agreed a sweeping deal with the UK government to provide free technology to the public sector from the NHS to local councils – a move campaigners have called "dangerously naive". The US company will be asked to "upskill" tens of thousands of civil servants in technology, including in using artificial intelligence, as part of an agreement which will not require the government to pay. It is considered in Whitehall to be giving Google "a foot in the door" as the digitisation of public services accelerates. However, the agreement prompted concerns about the precariousness of UK public data being held on US servers amid the unpredictable leadership of Donald Trump. The Department of Science, Innovation and Technology (DSIT) said Google Cloud, which provides databases, machine learning and computing power, had "agreed to work with the UK government in helping public services use advanced tech to shake off decades-old 'ball and chain' legacy contracts which leave essential services vulnerable to cyber-attack". Google's services are considered more agile and efficient than traditional competitors, but there are concerns in Whitehall's digital circles about the government becoming locked into a new kind of dependency.


Government AI roll-outs threatened by outdated IT systems

The Guardian

The government's ambition to boost efficiency by embedding AI in all aspects of its work risks being undermined by out-of-date technology, poor quality data and a lack of skilled staff, an influential Commons committee has warned. The report by the cross-party public accounts committee (PAC) found that more than 20 government IT systems identified as "legacy", meaning out of date and unsupported, have yet to be given funding to improve them. Government research cited by the PAC in the report found that almost a third of central government IT systems met this definition in 2024. Keir Starmer's government has repeatedly stressed its desire to increase economic growth through the mass take-up of AI systems, including in the public sector. An official plan for the technology published in January called for the government to "rapidly pilot" AI-powered services, saying this would both increase productivity and improve people's experience of dealing with officialdom.


SAIF: A Comprehensive Framework for Evaluating the Risks of Generative AI in the Public Sector

Lee, Kyeongryul, Kim, Heehyeon, Whang, Joyce Jiyoung

arXiv.org Artificial Intelligence

The rapid adoption of generative AI in the public sector, encompassing diverse applications ranging from automated public assistance to welfare services and immigration processes, highlights its transformative potential while underscoring the pressing need for thorough risk assessments. Despite its growing presence, evaluations of risks associated with AI-driven systems in the public sector remain insufficiently explored. Building upon an established taxonomy of AI risks derived from diverse government policies and corporate guidelines, we investigate the critical risks posed by generative AI in the public sector while extending the scope to account for its multimodal capabilities. In addition, we propose a Systematic dAta generatIon Framework for evaluating the risks of generative AI (SAIF). SAIF involves four key stages: breaking down risks, designing scenarios, applying jailbreak methods, and exploring prompt types. It ensures the systematic and consistent generation of prompt data, facilitating a comprehensive evaluation while providing a solid foundation for mitigating the risks. Furthermore, SAIF is designed to accommodate emerging jailbreak methods and evolving prompt types, thereby enabling effective responses to unforeseen risk scenarios. We believe that this study can play a crucial role in fostering the safe and responsible integration of generative AI into the public sector.
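SAIF's four stages (breaking down risks, designing scenarios, applying jailbreak methods, and exploring prompt types) amount to systematically crossing those dimensions to produce evaluation prompts. The sketch below illustrates that combinatorial structure in Python; it is an assumption-laden illustration, not the authors' implementation, and all names, example risks, and the `render` function are hypothetical.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class EvalPrompt:
    risk: str          # stage 1: a decomposed risk category
    scenario: str      # stage 2: a public-sector deployment scenario
    jailbreak: str     # stage 3: an applied jailbreak method (or "none")
    prompt_type: str   # stage 4: the prompt modality/phrasing explored
    text: str

def generate_prompts(risks, scenarios, jailbreaks, prompt_types, render):
    """Cross all four stages so every combination yields one eval prompt,
    making the generated prompt set systematic and consistent."""
    return [
        EvalPrompt(r, s, j, t, render(r, s, j, t))
        for r, s, j, t in product(risks, scenarios, jailbreaks, prompt_types)
    ]

# Hypothetical inputs for illustration only.
prompts = generate_prompts(
    risks=["privacy leakage", "discriminatory output"],
    scenarios=["welfare eligibility chatbot"],
    jailbreaks=["none", "role-play"],
    prompt_types=["direct question", "multimodal (image + text)"],
    render=lambda r, s, j, t: f"[{t}] ({j}) probe '{r}' in scenario: {s}",
)
```

Because new jailbreak methods or prompt types simply extend one input list, this structure also reflects the paper's claim that SAIF can accommodate emerging attack techniques without redesigning the pipeline.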


Why Starmer and Reeves are pinning their hopes on AI to drive growth in UK

The Guardian

The spectre of last week's bond market sell-off hangs over the government's artificial intelligence strategy. Investors, and voters, want to know where the growth is in the UK economy. Keir Starmer and the chancellor, Rachel Reeves, believe AI is a significant part of the answer. The UK has considerable strengths in AI, which can be loosely defined as computer systems performing tasks that typically require human intelligence (ranging from summarising a document to assessing a medical patient's symptoms and writing emails). Those strengths include the high-quality research and engineering talent coming out of UK universities and the fact that the country already hosts a number of leading AI companies, led by the UK-founded Google DeepMind.


Amazon-hosted AI tool for UK military recruitment 'carries risk of data breach'

The Guardian

An artificial intelligence tool hosted by Amazon and designed to boost UK Ministry of Defence recruitment puts defence personnel at risk of being identified publicly, according to a government assessment. Data used in the automated system to improve the drafting of defence job adverts and attract more diverse candidates by improving the inclusiveness of the language includes names, roles and emails of military personnel and is stored by Amazon in the US. This means "a data breach may have concerning consequences, ie identification of defence personnel", according to documents detailing government AI systems published for the first time today. The risk has been judged to be "low" and the MoD said "robust safeguards" have been put in place by the suppliers, Textio, Amazon Web Services and Amazon GuardDuty, a threat detection service. But it is one of several risks acknowledged by the government about its use of AI tools in the public sector in a tranche of documents released to improve transparency about the central government's use of algorithms. Official declarations about how the algorithms work stress that mitigations and safeguards are in place to tackle risks, as ministers push to use AI to boost UK economic productivity and, in the words of the technology secretary, Peter Kyle, on Tuesday, "bring public services back from the brink".


UK government failing to list use of AI on mandatory register

The Guardian

Not a single Whitehall department has registered the use of artificial intelligence systems since the government said it would become mandatory, prompting warnings that the public sector is "flying blind" about the deployment of algorithmic technology affecting millions of lives. AI is already being used by government to inform decisions on everything from benefit payments to immigration enforcement, and records show public bodies have awarded dozens of contracts for AI and algorithmic services. A contract for facial recognition software, worth up to £20m, was put up for grabs last week by a police procurement body set up by the Home Office, reigniting concerns about "mass biometric surveillance". But details of only nine algorithmic systems have so far been submitted to a public register, with none of a growing number of AI programs used in the welfare system, by the Home Office or by the police among them. The dearth of information comes despite the government announcing in February this year that the use of the AI register would now be "a requirement for all government departments".


Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization

Mokander, Jakob, Schroeder, Ralph

arXiv.org Artificial Intelligence

The use of artificial intelligence (AI) in the public sector is best understood as a continuation and intensification of long-standing rationalization and bureaucratization processes. Drawing on Weber, we take the core of these processes to be the replacement of traditions with instrumental rationality, i.e., the most calculable and efficient way of achieving any given policy objective. In this article, we demonstrate how many of the criticisms, both among the public and in scholarship, directed towards AI systems spring from well-known tensions at the heart of Weberian rationalization. To illustrate this point, we introduce a thought experiment whereby AI systems are used to optimize tax policy to advance a specific normative end, reducing economic inequality. Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible. However, it also highlights that AI-driven policy optimization (i) comes at the exclusion of other competing political values, (ii) overrides citizens' sense of their non-instrumental obligations to each other, and (iii) undermines the notion of humans as self-determining beings. Contemporary scholarship and advocacy directed towards ensuring that AI systems are legal, ethical, and safe build on and reinforce central assumptions that underpin the process of rationalization, including the modern idea that science can sweep away oppressive systems and replace them with a rule of reason that would rescue humans from moral injustices. That is overly optimistic. Science can only provide the means; it cannot dictate the ends. Nonetheless, the use of AI in the public sector can also benefit the institutions and processes of liberal democracies. Most importantly, AI-driven policy optimization demands that normative ends are made explicit and formalized, thereby subjecting them to public scrutiny and debate.


TechScape: Here's four ways a new Labour government could use tech to boost Britain

The Guardian

Barring an asteroid strike, Keir Starmer is going to be the UK prime minister in three days. Given the lead in polling, I'd probably bet on him over an asteroid, too. Labour will come into government with a broken state, a flatlining economy and no money. A thin manifesto and an enormous parliamentary majority mean the party will almost certainly end up stretching further afield for ideas about how to deal with that trilemma from hell. So let's try and offer some.